9 research outputs found

    Logic Programs as Declarative and Procedural Bias in Inductive Logic Programming

    Machine Learning is necessary for the development of Artificial Intelligence, as pointed out by Turing in his 1950 article "Computing Machinery and Intelligence". It is in the same article that Turing suggested the use of computational logic and background knowledge for learning. This thesis follows a logic-based machine learning approach called Inductive Logic Programming (ILP), which has advantages over other machine learning approaches in relational learning and in utilising background knowledge. ILP uses logic programs as a uniform representation for hypotheses, background knowledge and examples, but its declarative bias is usually encoded using metalogical statements. This thesis advocates the use of logic programs to represent both declarative and procedural bias, which results in a single-language representation framework. We show in this thesis that using a logic program called the top theory as declarative bias leads to a sound and complete multi-clause learning system, MC-TopLog. It overcomes the entailment-incompleteness of Progol, and thus outperforms Progol in predictive accuracy on learning grammars and strategies for playing the game of Nim. MC-TopLog has been applied to two real-world applications funded by Syngenta, an agricultural company. A higher-order extension of top theories results in meta-interpreters, which allow the introduction of new predicate symbols. The resulting ILP system, Metagol, can therefore perform predicate invention, which is an intrinsically higher-order logic operation. Metagol also leverages the procedural semantics of Prolog to encode procedural bias, so that it outperforms both its ASP version and ILP systems without an equivalent procedural bias in terms of efficiency and accuracy. This is demonstrated by experiments on learning regular, context-free and natural grammars. Metagol is also applied to non-grammar learning tasks involving recursion and predicate invention, such as learning a definition of staircases and learning robot strategies. Both MC-TopLog and Metagol are based on a ⊤-directed framework, which differs from other multi-clause learning systems based on Inverse Entailment, such as CF-Induction, XHAIL and IMPARO. Compared to TAL, another ⊤-directed multi-clause learning system, Metagol allows explicit higher-order assumptions to be encoded in the form of meta-rules.

    Meta-interpretive learning of higher-order dyadic datalog: predicate invention revisited

    Since the late 1990s predicate invention has been under-explored within inductive logic programming due to difficulties in formulating efficient search mechanisms. However, a recent paper demonstrated that both predicate invention and the learning of recursion can be efficiently implemented for regular and context-free grammars, by way of metalogical substitutions with respect to a modified Prolog meta-interpreter which acts as the learning engine. New predicate symbols are introduced as constants representing existentially quantified higher-order variables. The approach demonstrates that predicate invention can be treated as a form of higher-order logical reasoning. In this paper we generalise the approach of meta-interpretive learning (MIL) to that of learning higher-order dyadic datalog programs. We show that with an infinite signature the higher-order dyadic datalog class H^2_2 has universal Turing expressivity, though H^2_2 is decidable given a finite signature. Additionally we show that Knuth–Bendix ordering of the hypothesis space together with logarithmic clause bounding allows our MIL implementation Metagol_D to PAC-learn minimal cardinality H^2_2 definitions. This result is consistent with our experiments, which indicate that Metagol_D efficiently learns compact H^2_2 definitions involving predicate invention for learning robotic strategies, the East–West train challenge and NELL. Additionally, higher-order concepts were learned in the NELL language learning domain. The Metagol code and datasets described in this paper have been made publicly available on a website to allow reproduction of the results in this paper.
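The chain metarule central to MIL, P(X,Y) ← Q(X,Z), R(Z,Y), can be illustrated with a small sketch. The toy search below (the predicate names, examples and the Python setting are all invented for illustration; Metagol itself works by abduction within a Prolog meta-interpreter) enumerates substitutions for the higher-order variables Q and R over a dyadic background signature and keeps those consistent with the examples:

```python
from itertools import product

# Ground background facts for dyadic predicates (illustrative names).
background = {
    "mother": {("ann", "amy"), ("amy", "amelia")},
    "father": {("steve", "amy")},
}

positives = {("ann", "amelia")}                    # grandmother(ann, amelia)
negatives = {("steve", "amelia"), ("amy", "ann")}

def chain(q, r):
    """Pairs (X, Y) derivable from the metarule instance P(X,Y) :- q(X,Z), r(Z,Y)."""
    return {(x, y) for (x, z1) in background[q]
                   for (z2, y) in background[r] if z1 == z2}

# Substitute every pair of background predicates for (Q, R) and keep
# instances that cover all positives and no negatives.
hypotheses = []
for q, r in product(background, repeat=2):
    derived = chain(q, r)
    if positives <= derived and not (negatives & derived):
        hypotheses.append((q, r))

print(hypotheses)  # → [('mother', 'mother')]
```

Predicate invention would correspond to letting Q or R range over fresh symbols that are themselves defined by further metarule instances, which is what makes the real search hard.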

    Meta-Interpretive Learning of Higher-Order Dyadic Datalog: Predicate Invention revisited

    In recent years predicate invention has been under-explored within Inductive Logic Programming due to difficulties in formulating efficient search mechanisms. However, a recent paper demonstrated that both predicate invention and the learning of recursion can be efficiently implemented for regular and context-free grammars, by way of abduction with respect to a meta-interpreter. New predicate symbols are introduced as constants representing existentially quantified higher-order variables. In this paper we generalise the approach of Meta-Interpretive Learning (MIL) to that of learning higher-order dyadic datalog programs. We show that with an infinite signature the higher-order dyadic datalog class H^2_2 has universal Turing expressivity, though H^2_2 is decidable given a finite signature. Additionally we show that Knuth–Bendix ordering of the hypothesis space together with logarithmic clause bounding allows our dyadic MIL implementation Metagol_D to PAC-learn minimal cardinality H^2_2 definitions. This result is consistent with our experiments, which indicate that Metagol_D efficiently learns compact H^2_2 definitions involving predicate invention for robotic strategies and higher-order concepts in the NELL language learning domain.

    MC-Toplog: Complete multi-clause learning guided by a top theory

    Within ILP much effort has been put into designing methods that are complete for hypothesis finding. However, it is not clear whether completeness is important in real-world applications. This paper uses a simplified version of grammar learning to show how a complete method can improve on the learning results of an incomplete method. Given the need for a complete method in real-world applications, we introduce a method called ⊤-directed theory co-derivation, which is sound and complete for deriving a hypothesis within the declarative bias. The proposed method has been implemented in the ILP system MC-TopLog and tested on grammar learning and the learning of game strategies. Compared to Progol5, an efficient but incomplete ILP system, MC-TopLog has higher predictive accuracies, especially when the background knowledge is severely incomplete.

    MetaBayes: Bayesian Meta-Interpretative Learning using Higher-Order Stochastic Refinement

    Recent papers have demonstrated that both predicate invention and the learning of recursion can be efficiently implemented by way of abduction with respect to a meta-interpreter. This paper shows how Meta-Interpretive Learning (MIL) can be extended to implement a Bayesian posterior distribution over the hypothesis space by treating the meta-interpreter as a Stochastic Logic Program. The resulting MetaBayes system uses stochastic refinement to randomly sample consistent hypotheses, which are used to approximate Bayes' prediction. Most approaches to Statistical Relational Learning involve separate phases of model estimation and parameter estimation. We show how a variant of the MetaBayes approach can be used to carry out simultaneous model and parameter estimation for a new representation we refer to as Super-imposed Logic Programs (SiLPs). The implementation of this approach is referred to as MetaBayesSiLP. SiLPs are a particular form of ProbLog program, and so their parameters can also be estimated using the more traditional EM approach employed by ProbLog. This second approach is implemented in a new system called MilProbLog. Experiments are conducted on learning grammars, family relations and a natural language domain. These demonstrate that MetaBayes outperforms MetaBayesMAP in terms of predictive accuracy, and also outperforms both MilProbLog and MetaBayesSiLP on log-likelihood measures. However, MetaBayes incurs substantially higher running times than MetaBayesMAP. On the other hand, MetaBayes and MetaBayesSiLP have similar running times, while both have much shorter running times than MilProbLog.
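Approximating Bayes' prediction by sampling consistent hypotheses can be caricatured in a few lines. In this deliberately simplified analogue (hypotheses are intervals rather than logic programs, and the prior, examples and numbers are all invented for illustration), we sample hypotheses from a prior, reject those inconsistent with the examples, and predict a query by the fraction of retained samples that entail it:

```python
import random

random.seed(0)

# Each hypothesis is an interval [lo, hi]; it "entails" x when lo <= x <= hi.
hypotheses = [(0, 3), (0, 5), (2, 9), (5, 9)]
prior = [0.4, 0.3, 0.2, 0.1]
positives = [2, 3]          # observed examples every hypothesis must cover

def consistent(h):
    lo, hi = h
    return all(lo <= x <= hi for x in positives)

# Rejection-sample from the prior, keeping only consistent hypotheses;
# note (5, 9) is always rejected since it fails to cover 2.
samples = []
while len(samples) < 1000:
    h = random.choices(hypotheses, weights=prior)[0]
    if consistent(h):
        samples.append(h)

def predict(x):
    """Posterior probability that x is entailed, estimated from samples."""
    return sum(1 for (lo, hi) in samples if lo <= x <= hi) / len(samples)

print(predict(2))   # → 1.0 (every consistent hypothesis covers 2)
print(predict(10))  # → 0.0 (no hypothesis covers 10)
```

The real MetaBayes samples logic programs by stochastic refinement of a meta-interpreter rather than intervals, but the model-averaged prediction step has this shape.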

    Meta-interpretive learning: application to grammatical inference

    Despite early interest, predicate invention has lately been under-explored within ILP. We develop a framework in which predicate invention and recursive generalisations are implemented using abduction with respect to a meta-interpreter. The approach is based on a previously unexplored case of Inverse Entailment for grammatical inference of regular languages. Every abduced grammar H is represented by a conjunction of existentially quantified atomic formulae. Thus ¬H is a universally quantified clause representing a denial. The hypothesis space of solutions for ¬H can be ordered by θ-subsumption. We show that the representation can be mapped to a fragment of Higher-Order Datalog in which atomic formulae in H are projections of first-order definite clause grammar rules and the existentially quantified variables are projections of first-order predicate symbols. This allows predicate invention to be effected by the introduction of first-order variables. This application of abduction to conduct predicate invention is related to previous work by Inoue and Furukawa. We show that the approach is sufficiently flexible to support learning of context-free grammars from positive and negative examples, a problem shown to be theoretically possible by E.M. Gold in the 1960s, though to the authors' knowledge not previously demonstrated within Grammatical Inference. We describe the implementation of Metagol_R and Metagol_CF for learning regular and context-free grammars respectively. Experiments indicate that o
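The grammar representation described here can be made concrete: each regular-grammar production Q → c Q′ becomes a ground fact delta(Q, c, Q′). A minimal recogniser over such facts might look as follows (the state names and the deterministic dict encoding are illustrative only; the paper's contribution is learning the delta facts by abduction, not parsing with them):

```python
# Each regular-grammar production Q -> c Q' is stored as a ground fact
# delta(Q, c, Q'), here a dict keyed on (state, symbol). This grammar
# accepts the language a*b. State names are invented for illustration.
delta = {("s0", "a"): "s0", ("s0", "b"): "s1"}
final = {"s1"}

def accepts(state, s):
    """Recognise string s from `state` by following delta transitions."""
    if not s:
        return state in final
    nxt = delta.get((state, s[0]))
    return nxt is not None and accepts(nxt, s[1:])

print(accepts("s0", "aaab"))  # → True
print(accepts("s0", "ba"))    # → False
```

Learning the grammar then amounts to abducing a set of delta facts under which all positive strings are accepted and all negative strings rejected.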

    Bias reformulation for one-shot function induction

    In recent years predicate invention has been underexplored as a bias reformulation mechanism within Inductive Logic Programming due to difficulties in formulating efficient search mechanisms. However, recent papers on a new approach called Meta-Interpretive Learning have demonstrated that both predicate invention and learning recursive predicates can be efficiently implemented for various fragments of definite clause logic using a form of abduction within a meta-interpreter. This paper explores the effect of bias reformulation produced by Meta-Interpretive Learning on a series of Program Induction tasks involving string transformations. These tasks have real-world applications in the use of spreadsheet technology. The existing implementation of program induction in Microsoft's FlashFill (part of Excel 2013) already has strong performance on this problem, and performs one-shot learning, in which a simple transformation program is generated from a single example instance and applied to the remainder of the column in a spreadsheet. However, no existing technique has been demonstrated to improve learning performance over a series of tasks in the way humans do. In this paper we show how a functional variant of the recently developed Metagol_D system can be applied to this task. In experiments we study a regime of layered bias reformulation in which size-bounds of hypotheses are successively relaxed in each layer and learned programs re-use invented predicates from previous layers. Results indicate that this approach leads to consistent speed increases in learning, more compact definitions and consistently higher predictive accuracy over successive layers. Comparison to both FlashFill and human performance indicates that the new system, Metagol_DF, has performance approaching the skill level of both an existing commercial system and that of humans on one-shot learning over the same tasks. The induced programs are relatively easily read and understood by a human programmer. (Funded by the National Science Foundation (U.S.), STC Center for Brains, Minds and Machines, Award CCF-1231216.)
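One-shot induction over string transformations can be sketched with a tiny enumerative search. The DSL below (the primitive names and the depth bound are invented for illustration; FlashFill and Metagol_DF both use far richer languages and cleverer search) induces a composition of string functions from a single example and reuses it on new inputs:

```python
# A toy DSL of string primitives; compositions of these are our programs.
PRIMS = {
    "lower":      str.lower,
    "upper":      str.upper,
    "first_word": lambda s: s.split()[0],
    "initial":    lambda s: s[0],
    "add_dot":    lambda s: s + ".",
}

def run(prog, s):
    """Apply a program (a list of primitive names) left to right."""
    for name in prog:
        s = PRIMS[name](s)
    return s

def induce(example_in, example_out, max_depth=3):
    """Breadth-first search for the first composition mapping in -> out."""
    progs = [[]]
    for _ in range(max_depth):
        progs = [p + [name] for p in progs for name in PRIMS]
        for p in progs:
            try:
                if run(p, example_in) == example_out:
                    return p
            except IndexError:      # primitive applied to an empty string
                continue
    return None

# One-shot learning: a single example, then reuse on a new input.
prog = induce("Alan Turing", "A.")
print(prog)                       # → ['initial', 'add_dot']
print(run(prog, "Grace Hopper"))  # → G.
```

The layered regime in the paper corresponds, roughly, to raising the depth bound in stages while adding previously induced programs to the primitive set.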